9 research outputs found

    Multi-Resolution Texture Coding for Multi-Resolution 3D Meshes

    We present an innovative system to encode and transmit textured multi-resolution 3D meshes progressively, with no need to send several texture images, one for each mesh LOD (Level Of Detail). All texture LODs are created from the finest one (associated with the finest mesh), but can be reconstructed progressively from the coarsest thanks to refinement images calculated in the encoding process and transmitted only if needed. This allows us to adjust the LOD/quality of both the 3D mesh and the texture according to the rendering power of the device that will display them and to the network capacity. Additionally, we achieve significant savings in data transmission by avoiding texture coordinates altogether, since they are generated automatically by an unwrapping system agreed upon by both encoder and decoder.
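    A minimal sketch of the refinement-image idea in Python, assuming each texture LOD is simply a downsampled version of the next finer one and that the residual is stored as a signed image; the system's actual filters, residual coding, and transmission format are not reproduced here.

```python
import numpy as np
import cv2


def make_refinement(fine_lod, coarse_lod):
    """Encoder side: the refinement image is the signed difference between a
    texture LOD and the upsampled version of the next coarser LOD."""
    up = cv2.resize(coarse_lod, (fine_lod.shape[1], fine_lod.shape[0]),
                    interpolation=cv2.INTER_LINEAR)
    return fine_lod.astype(np.int16) - up.astype(np.int16)


def reconstruct_lod(coarse_lod, refinement):
    """Decoder side: upsample the coarser LOD and add the transmitted
    refinement image to recover the finer texture LOD."""
    up = cv2.resize(coarse_lod, (refinement.shape[1], refinement.shape[0]),
                    interpolation=cv2.INTER_LINEAR)
    return np.clip(up.astype(np.int16) + refinement, 0, 255).astype(np.uint8)
```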

    ITEM: Inter-Texture Error Measurement for 3D Meshes

    We introduce a simple and innovative method to compare any two texture maps, regardless of their sizes, aspect ratios, or even masks, as long as they are both meant to be mapped onto the same 3D mesh. Our system is based on a zero-distortion 3D mesh unwrapping technique that produces two new adapted texture atlases with the same mask but different texel colors, in which every texel covers the same area in 3D. Once these adapted atlases are created, we measure their difference with ITEM-RMSE, a slightly modified version of the standard RMSE defined for images. ITEM-RMSE is more meaningful and reliable than RMSE because it only takes into account the texels inside the mask, since they are the only ones that will actually be used during rendering. Our method is not only very useful for comparing the space efficiency of different texture atlas generation algorithms, but also for quantifying texture loss in compression schemes for multi-resolution textured 3D meshes.
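    As a rough illustration, ITEM-RMSE behaves like a standard image RMSE restricted to the texels inside the shared mask. The sketch below assumes the two adapted atlases have the same resolution and a common boolean mask, as the abstract describes.

```python
import numpy as np


def item_rmse(atlas_a, atlas_b, mask):
    """RMSE between two adapted texture atlases, counting only the texels
    inside the shared mask (the only ones actually used during rendering).

    atlas_a, atlas_b: (H, W, 3) colour atlases with identical layout
    mask:             (H, W) boolean, True where a texel belongs to a patch
    """
    a = atlas_a[mask].astype(np.float64)
    b = atlas_b[mask].astype(np.float64)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```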

    3D facial merging for virtual human reconstruction

    There is an increasing need for easy and affordable technologies to automatically generate virtual 3D models from their real counterparts. In particular, 3D human reconstruction has driven the creation of many clever techniques, most of them based on the visual hull (VH) concept. Such techniques do not require expensive hardware; however, they tend to yield 3D humanoids with realistic bodies but mediocre faces, since VH cannot handle concavities. On the other hand, structured light projectors make it possible to capture very accurate depth data, and thus to reconstruct realistic faces, but they are too expensive to deploy several of them. We have developed a technique to merge a VH-based 3D mesh of a reconstructed humanoid with the depth data of its face, captured by a single structured light projector. By combining the advantages of both systems in a simple setting, we are able to reconstruct realistic 3D human models with believable faces.

    Fast feature matching for detailed point cloud generation

    Structure from motion is a very popular technique for obtaining three-dimensional point cloud-based reconstructions of objects from unorganised sets of images by analysing the correspondences between feature points detected in those images. However, the point clouds stemming from usual feature point extractors such as SIFT are frequently too sparse for reliable surface recovery. In this paper we show that alternative feature descriptors such as A-KAZE, which provide denser coverage of images, yield better results and more detailed point clouds. Unfortunately, the use of a dramatically increased number of points per image poses a computational challenge. We propose a technique based on epipolar geometry restrictions to significantly cut down on processing time, and an efficient implementation thereof on a GPU.
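    The epipolar restriction idea can be illustrated with OpenCV's A-KAZE detector: given a fundamental matrix F (known from calibration or a prior estimation, an assumption here), candidate matches are kept only if each point lies within a few pixels of the epipolar line induced by its counterpart. This is just a CPU sketch of the geometric filter; the paper's GPU implementation and exact thresholds are not reproduced.

```python
import cv2
import numpy as np


def epipolar_filtered_matches(img1, img2, F, max_dist=1.5):
    """Match A-KAZE descriptors between two images and keep only the pairs
    consistent with the epipolar geometry encoded by F."""
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(img1, None)
    kp2, des2 = akaze.detectAndCompute(img2, None)

    # A-KAZE produces binary descriptors, so Hamming distance is appropriate.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    kept = []
    for m in matches:
        p1 = np.array([*kp1[m.queryIdx].pt, 1.0])
        p2 = np.array([*kp2[m.trainIdx].pt, 1.0])
        line = F @ p1                                  # epipolar line in image 2
        dist = abs(p2 @ line) / np.hypot(line[0], line[1])
        if dist < max_dist:                            # point-to-line distance (px)
            kept.append(m)
    return kept
```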

    Composition of texture atlases for 3D mesh multi-texturing

    We introduce an automatic technique for mapping, onto a 3D triangle mesh approximating the shape of a real 3D object, a high-resolution texture synthesized from several pictures taken simultaneously by real cameras surrounding the object. We create a texture atlas by first unwrapping the 3D mesh to form a set of 2D patches with no distortion (i.e., the angles and relative sizes of the 3D triangles are preserved in the atlas), and then mixing the color information from the input images in three further steps: step no. 2 packs the 2D patches so that the bounding canvas of the set is as small as possible; step no. 3 assigns at most one triangle to each canvas pixel; finally, in step no. 4, the color of each pixel is calculated as a smoothly varying weighted average of the corresponding pixels from several input photographs. Our method is especially well suited to the creation of realistic 3D models without the need for graphic artists to retouch the texture. Categories and Subject Descriptors (according to ACM CCS): Computer Graphics [I.3.7]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture.
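    A toy version of the final blending step (step no. 4), assuming the per-camera weights (for instance, based on the angle between each camera's viewing direction and the triangle normal) have already been computed; the unwrapping, packing, and triangle-assignment steps are not shown.

```python
import numpy as np


def blend_texel(camera_colors, camera_weights):
    """Colour of one canvas texel as a smoothly varying weighted average of
    the corresponding pixels seen by several cameras.

    camera_colors:  (N, 3) colour of this texel in each of N photographs
    camera_weights: (N,)   non-negative per-camera weights (assumed given)
    """
    w = np.asarray(camera_weights, dtype=np.float64)
    c = np.asarray(camera_colors, dtype=np.float64)
    w = w / w.sum()
    return (w[:, None] * c).sum(axis=0)
```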

    Face Lift Surgery for Reconstructed Virtual Humans

    We introduce an innovative, semi-automatic method to transform low-resolution facial meshes into high-definition ones, based on tailoring a generic, neutral human head model, designed by an artist, to fit the facial features of a specific person. To determine these facial features we need to select a set of "control points" (corners of eyes, lips, etc.) in at least two photographs of the subject's face. The neutral head mesh is then automatically reshaped according to the relation between the control points in the original subject's mesh through a set of transformation pyramids. The last step consists of merging both meshes and filling the gaps that appear in the previous process. This algorithm avoids the use of expensive and complicated technologies to obtain depth maps, which would also need to be meshed later.
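    The transformation pyramids themselves are not detailed in the abstract; as a much simpler stand-in, the sketch below uses a standard Umeyama (Procrustes) fit to globally align the generic head's control points with the subject's before any local reshaping, purely to illustrate how corresponding control points can drive the deformation. The function name and the choice of a global similarity transform are assumptions, not the paper's method.

```python
import numpy as np


def align_generic_head(control_src, control_dst):
    """Least-squares similarity transform (scale, rotation, translation)
    mapping the generic head's control points onto the subject's control
    points (standard Umeyama alignment; a simplified stand-in for the
    paper's transformation pyramids)."""
    src = np.asarray(control_src, dtype=np.float64)   # (N, 3) generic model
    dst = np.asarray(control_dst, dtype=np.float64)   # (N, 3) subject
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d

    cov = D.T @ S / len(src)
    U, sig, Vt = np.linalg.svd(cov)
    sign = np.ones(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:      # avoid reflections
        sign[-1] = -1.0
    R = U @ np.diag(sign) @ Vt
    scale = (sig * sign).sum() * len(src) / (S ** 2).sum()
    t = mu_d - scale * R @ mu_s
    return scale, R, t                                # x' = scale * R @ x + t
```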

    SPLASH: A hybrid 3D modeling/rendering approach mixing splats and meshes

    We propose a hybrid 3D modeling and rendering approach called SPLASH, which combines the modeling flexibility and robustness of SPLAts with the rendering simplicity and maturity of meSHes. Together with this novel SPLASH concept, we also propose a system that turns a 3D point cloud, obtained for example through an SfM (Structure from Motion) approach, into a multi-textured hybrid 3D model whose shape is described by a triangle mesh plus a collection of elliptical splats.
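    As an illustration of the hybrid representation, a SPLASH-style model can be stored as a triangle mesh plus a list of elliptical splats. The field names and layout below are illustrative assumptions, not the paper's actual data structures.

```python
from dataclasses import dataclass, field
import numpy as np


@dataclass
class Splat:
    """Elliptical splat: centre, unit normal, and two in-plane semi-axes
    (direction scaled by length) spanning the ellipse."""
    center: np.ndarray
    normal: np.ndarray
    axis_u: np.ndarray
    axis_v: np.ndarray


@dataclass
class SplashModel:
    """Hybrid shape description: a triangle mesh plus a collection of
    elliptical splats, both textured from the same multi-texture atlas."""
    vertices: np.ndarray                      # (V, 3) float positions
    triangles: np.ndarray                     # (T, 3) int indices into vertices
    splats: list = field(default_factory=list)
```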

    Textured splat-based point clouds for rendering in handheld devices

    We propose a novel technique for modeling and rendering a 3D point cloud obtained from a set of photographs of a real 3D scene as a set of textured elliptical splats. We first obtain the base splat model by calculating, for each point of the cloud, an ellipse that locally approximates the underlying surface. We then refine the base model by removing redundant splats to minimize overlaps, and merging splats covering flat regions of the point cloud into larger ellipses. We later apply a multi-texturing process to generate a single texture atlas from the set of photographs, blending information from multiple cameras for every splat. Finally, we render this multi-textured, splat-based 3D model with an efficient implementation of OpenGL ES 2.0 vertex and fragment shaders, which ensures fluid display on handheld devices.
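    The base-splat construction can be sketched as a local PCA around each point: the eigenvector of least variance gives the normal, and the other two, scaled by the neighbourhood spread, give the ellipse axes. The brute-force neighbour search and the one-sigma axis scaling below are simplifying assumptions; the refinement (removing redundant splats, merging flat regions) and multi-texturing stages are not shown.

```python
import numpy as np


def fit_base_splat(points, idx, k_neighbors=16):
    """Approximate the surface around points[idx] with an elliptical splat
    via PCA of its k nearest neighbours."""
    dists = np.linalg.norm(points - points[idx], axis=1)
    nbrs = points[np.argsort(dists)[:k_neighbors]]
    center = nbrs.mean(axis=0)

    centred = nbrs - center
    cov = centred.T @ centred / len(nbrs)
    eigval, eigvec = np.linalg.eigh(cov)            # eigenvalues in ascending order
    normal = eigvec[:, 0]                           # least-variance direction
    axis_u = eigvec[:, 2] * np.sqrt(eigval[2])      # major semi-axis
    axis_v = eigvec[:, 1] * np.sqrt(eigval[1])      # minor semi-axis
    return center, normal, axis_u, axis_v
```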

    Refined facial disparity maps for automatic creation of 3D avatars

    We propose a new method to automatically refine a facial disparity map obtained with standard cameras under conventional illumination conditions by using a smart combination of traditional computer vision and 3D graphics techniques. Our system takes as input two stereo images acquired with standard (calibrated) cameras and uses dense disparity estimation strategies to obtain a coarse initial disparity map, and SIFT to detect and match several feature points in the subject's face. We then use these points as anchors to modify the disparity in the facial area by building a Delaunay triangulation of their convex hull and interpolating their disparity values inside each triangle. We thus obtain a refined disparity map that provides a much more accurate representation of the subject's facial features. This refined facial disparity map may be easily transformed, through the camera calibration parameters, into a depth map to be used, also automatically, to improve the facial mesh of a 3D avatar to match the subject's real features.
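    The anchor-based refinement stage might look like the following sketch: given the coarse disparity map and the matched facial feature points with their disparities, the disparity inside the anchors' convex hull is replaced by piecewise-linear interpolation over their Delaunay triangulation. The variable names and the use of SciPy are assumptions; the dense disparity estimation and SIFT matching steps are not shown.

```python
import numpy as np
from scipy.spatial import Delaunay


def refine_facial_disparity(coarse_disp, anchor_xy, anchor_disp):
    """Overwrite the disparity inside the convex hull of the facial anchor
    points with a piecewise-linear interpolation of the anchors' disparities
    over their Delaunay triangulation.

    coarse_disp: (H, W) initial dense disparity map
    anchor_xy:   (N, 2) pixel coordinates (x, y) of matched feature points
    anchor_disp: (N,)   disparity at each anchor point
    """
    h, w = coarse_disp.shape
    tri = Delaunay(anchor_xy)

    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.column_stack([xs.ravel(), ys.ravel()]).astype(np.float64)
    simplex = tri.find_simplex(pix)                 # -1 for pixels outside the hull
    inside = simplex >= 0

    # Barycentric coordinates of each inside pixel within its triangle.
    T = tri.transform[simplex[inside]]              # (M, 3, 2) affine maps
    b2 = np.einsum('ijk,ik->ij', T[:, :2, :], pix[inside] - T[:, 2, :])
    bary = np.column_stack([b2, 1.0 - b2.sum(axis=1)])
    verts = tri.simplices[simplex[inside]]          # (M, 3) anchor indices

    refined = coarse_disp.astype(np.float64)
    flat = refined.ravel()                          # view sharing refined's memory
    flat[inside] = (bary * anchor_disp[verts]).sum(axis=1)
    return refined
```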